feat: add IBM Agentics component #11468
Conversation
Walkthrough
Introduces an Agentics component enabling AI-driven data transformation via language models. Supports multiple transduction types (amap, areduce, generate) with configurable metadata tracking and multi-provider LLM support. Adds a corresponding package dependency.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Agentics
    participant LLM as Language Model
    participant DataFrame as Data Processing
    User->>Agentics: Call transduce() with config
    Agentics->>Agentics: Instantiate/configure LLM<br/>(IBM WatsonX or Google)
    Agentics->>DataFrame: Convert source to AG format
    Agentics->>DataFrame: Build target Pydantic model<br/>from schema
    alt Transduction Type: generate
        Agentics->>LLM: Generate prototypical instances
        LLM-->>Agentics: Generated data
    else Transduction Type: amap/areduce
        Agentics->>LLM: Execute transduction<br/>(amap/areduce)
        LLM-->>Agentics: Transformed data
        alt merge_source enabled
            Agentics->>DataFrame: Merge outputs<br/>with source
        end
    end
    Agentics->>DataFrame: Convert AG output to DataFrame
    Agentics->>Agentics: Add metadata if enabled<br/>(success/error/stats)
    Agentics-->>User: Return transformed DataFrame
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 3 failed
❌ Failed checks (1 error, 2 warnings)
✅ Passed checks (4 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
✨ Finishing touches
🧪 Generate unit tests (beta)
Actionable comments posted: 6
🤖 Fix all issues with AI agents
In `@src/backend/base/langflow/components/agentics/agentics.py`:
- Around line 31-36: Fix the metadata on the Agentics component class: correct
the typo in the description attribute from "amongs" to "among" and update the
documentation string to include the protocol prefix (e.g.,
"https://github.com/IBM/agentics/"); verify the display_name and icon attributes
(Agentics, "List") remain unchanged. Ensure the changes are made on the Agentics
class attributes: display_name, description, documentation, icon.
- Around line 31-36: The Agentics component class (Agentics) lives in a
non-standard subdirectory (agentics/) and the package __init__.py is empty;
either move the Agentics class file into an existing category (preferably
processing/ or agents/) and remove the agentics/ folder, or if you intend to
keep agentics/ as a new category, populate agentics/__init__.py to export and
forward the component (import Agentics and any helpers and expose them via
__all__) and add any package-level registration logic required by the component
loader; update imports throughout the codebase to reference the new location or
package export accordingly.
- Around line 252-253: The method currently constructs df =
output.to_dataframe() and returns df.to_dict(orient="records") while its
signature declares -> DataFrame; fix by either returning the DataFrame object
(return df) to match the DataFrame return type, or change the function's return
type annotation to list[dict] (or List[Dict[str, Any]]) to reflect
df.to_dict(orient="records"); update the signature accordingly and adjust any
downstream callers/types if you choose the latter.
- Around line 146-178: Remove the three dead helper methods _format_row_as_toml,
_create_base_row, and _add_metadata from the Agentics class: these methods are
never invoked (not used by transduce) and reference undefined attributes
(self.output_column_name, self.enable_metadata), so delete their definitions to
avoid dead code and invalid attribute references; ensure no other part of the
class or tests call these method names before removing.
- Around line 184-220: The model-instantiation block in Agentics must validate
and exhaustively handle providers and return typed errors: first check that
self.model is a non-empty list (avoid IndexError on self.model[0]) and raise a
clear ValueError if empty; use get_model_classes() to resolve
metadata.get("model_class") as you already do, then add explicit branches for
the other supported providers (OpenAI, Anthropic, Ollama) and instantiate the
correct LLM (or equivalent client) with the proper parameters (use
get_api_key_for_provider(self.user_id, provider, self.api_key) for API keys and
self.get_project_name() where needed); never return a string on failure—raise a
descriptive exception (fix the typo “paramters”) for unsupported/invalid
provider or missing API key so downstream consumers get a consistent error type.
- Around line 180-248: In transduce(), llm can remain None when self.model is
not a list; before creating target = AG(..., llm=llm) detect and handle
already-instantiated models by checking self.model's type/interface (e.g.,
isinstance(self.model, LLM) or a known Agentic wrapper) and set llm =
self.model, or if self.model is a dict/object that needs instantiation,
construct the LLM similarly to the list branch; if no valid model is available
raise a clear ValueError. Update the logic around the existing llm variable in
transduce (refer to the llm local, the isinstance(self.model, list) branch, and
the AG(...) instantiation) so llm is always a valid model or a clear error is
thrown before calling AG.
🧹 Nitpick comments (3)
src/backend/base/langflow/components/agentics/agentics.py (2)
77-77: Address TODO comment before merge. The TODO comment suggests the default value should be removed. Please either remove the default or remove the TODO if intentionally keeping it.
Do you want me to open an issue to track this task?
222-225: Remove commented-out debug code. These commented lines appear to be debug artifacts and should be removed before merge.
Suggested fix
```diff
-        # print("AAAAAA" , type(self.source))
-
-        # if isinstance(self.source, list):
         source = AG.from_dataframe(DataFrame(self.source))
```
src/backend/base/pyproject.toml (1)
20-20: Consider moving agentics-py to optional dependencies. The alpha version (0.2.2a1) is a pre-release; the latest stable version is 0.2.1. Since agentics-py is only used by the Agentics component (isolated to `src/backend/base/langflow/components/agentics/`), moving it to `[project.optional-dependencies]` would prevent all users from installing an unstable version unnecessarily. The `>=0.2.2a1` constraint is also too permissive for a pre-release; consider pinning to a tighter range if keeping it in core dependencies.
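If the dependency were moved out of the core list, the extras entry might look like the following sketch. The section layout and the `agentics` extra name are assumptions for illustration, not the PR's actual configuration:

```toml
# Hypothetical pyproject.toml fragment: ship agentics-py as an opt-in extra
[project.optional-dependencies]
agentics = [
    "agentics-py>=0.2.2a1,<0.3",  # pre-release; an upper bound avoids surprise upgrades
]
```

Users who want the component would then opt in explicitly (e.g. `pip install "langflow-base[agentics]"`, package name assumed) instead of every install pulling the alpha package.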
```python
class Agentics(Component):
    display_name = "Agentics"
    description = "Enables Map Reduce Style Agentic data transformations amongs dataframes"
    documentation: str = "github.com/IBM/agentics/"
    icon = "List"
```
Minor issues in class metadata.
- Typo in description: "amongs" → "among"
- Documentation URL missing protocol prefix
Suggested fix
```diff
 class Agentics(Component):
     display_name = "Agentics"
-    description = "Enables Map Reduce Style Agentic data transformations amongs dataframes"
-    documentation: str = "github.com/IBM/agentics/"
+    description = "Enables Map Reduce Style Agentic data transformations among dataframes"
+    documentation: str = "https://github.com/IBM/agentics/"
     icon = "List"
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
# Check if __init__.py exists in agentics subdirectory
echo "=== Checking agentics/__init__.py ==="
if [ -f "src/backend/base/langflow/components/agentics/__init__.py" ]; then
  cat -n src/backend/base/langflow/components/agentics/__init__.py
else
  echo "No __init__.py found in agentics/"
fi
echo ""
echo "=== Checking parent components/__init__.py for agentics import ==="
rg "agentics" src/backend/base/langflow/components/__init__.py || echo "No agentics import found"
echo ""
echo "=== Listing all subdirectories in components ==="
fd -t d -d 1 . src/backend/base/langflow/components/ | sort
```
Repository: langflow-ai/langflow
Length of output: 394
🏁 Script executed:
```shell
#!/bin/bash
# Verify agentics.py file exists
echo "=== Checking if agentics.py exists ==="
ls -la src/backend/base/langflow/components/agentics/
echo ""
echo "=== Checking knowledge_bases/__init__.py ==="
if [ -f "src/backend/base/langflow/components/knowledge_bases/__init__.py" ]; then
  cat -n src/backend/base/langflow/components/knowledge_bases/__init__.py
else
  echo "No __init__.py found"
fi
echo ""
echo "=== Checking if knowledge_bases is in parent __init__.py ==="
rg "knowledge_bases" src/backend/base/langflow/components/__init__.py || echo "No knowledge_bases import found"
echo ""
echo "=== Checking processing/__init__.py for structure reference ==="
head -20 src/backend/base/langflow/components/processing/__init__.py
```
Repository: langflow-ai/langflow
Length of output: 3872
🏁 Script executed:
```shell
#!/bin/bash
# Check the Agentics class definition
echo "=== Agentics component structure ==="
head -50 src/backend/base/langflow/components/agentics/agentics.py | tail -40
echo ""
echo "=== Example: processing/__init__.py pattern ==="
cat src/backend/base/langflow/components/processing/__init__.py
echo ""
echo "=== Check data/ subdirectory __init__.py if exists ==="
if [ -f "src/backend/base/langflow/components/data/__init__.py" ]; then
  cat src/backend/base/langflow/components/data/__init__.py
else
  echo "data/ directory does not exist"
fi
```
Repository: langflow-ai/langflow
Length of output: 1294
🏁 Script executed:
```shell
#!/bin/bash
# Check the main components __init__.py
echo "=== Main components/__init__.py ==="
cat src/backend/base/langflow/components/__init__.py | head -100
echo ""
echo "=== Check if there's dynamic discovery pattern ==="
rg "import_module|importlib" src/backend/base/langflow/components/__init__.py -A 3 | head -30
```
Repository: langflow-ai/langflow
Length of output: 598
Component directory placement and module configuration incomplete.
The agentics/ subdirectory is placed outside the documented structure. Coding guidelines specify new components should use: agents/, data/, embeddings/, input_output/, models/, processing/, prompts/, tools/, or vectorstores/.
Additionally, agentics/__init__.py is empty. If agentics/ is intentional as a new category (like knowledge_bases/), the __init__.py must be configured with component imports and forwarding logic. Otherwise, move the component to processing/ or agents/ and remove the agentics/ subdirectory.
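If the new `agentics/` category is kept, the export suggested above could be sketched as follows. This is a hypothetical fragment mirroring the pattern of sibling packages such as `processing/__init__.py`; the component loader may require additional registration logic beyond this:

```python
# src/backend/base/langflow/components/agentics/__init__.py (sketch)
from .agentics import Agentics

__all__ = ["Agentics"]
```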
```python
    def _format_row_as_toml(self, row: dict[str, Any]) -> str:
        """Convert a dictionary (row) into a TOML-formatted string."""
        formatted_dict = {str(col): {"value": str(val)} for col, val in row.items()}
        return toml.dumps(formatted_dict)

    def _create_base_row(
        self, original_row: dict[str, Any], model_response: str = "", batch_index: int = -1
    ) -> dict[str, Any]:
        """Create a base row with original columns and additional metadata."""
        row = original_row.copy()
        row[self.output_column_name] = model_response
        row["batch_index"] = batch_index
        return row

    def _add_metadata(
        self, row: dict[str, Any], *, success: bool = True, system_msg: str = "", error: str | None = None
    ) -> None:
        """Add metadata to a row if enabled."""
        if not self.enable_metadata:
            return

        if success:
            row["metadata"] = {
                "has_system_message": bool(system_msg),
                "input_length": len(row.get("text_input", "")),
                "response_length": len(row[self.output_column_name]),
                "processing_status": "success",
            }
        else:
            row["metadata"] = {
                "error": error,
                "processing_status": "failed",
            }
```
🧩 Analysis chain
🏁 Script executed:
```shell
# Get file info and read the agentics.py file
wc -l src/backend/base/langflow/components/agentics/agentics.py
```
Repository: langflow-ai/langflow
Length of output: 125
🏁 Script executed:
```shell
# Read the file to understand the class structure and these methods
cat -n src/backend/base/langflow/components/agentics/agentics.py
```
Repository: langflow-ai/langflow
Length of output: 11154
🏁 Script executed:
```shell
# Search for usages of these helper methods across the entire codebase
rg -n '_format_row_as_toml|_create_base_row|_add_metadata' --type=py
```
Repository: langflow-ai/langflow
Length of output: 2740
Remove unused helper methods that reference undefined attributes.
These methods are dead code that should not exist in this component:
- `_format_row_as_toml` (line 146): Never called in `transduce()`
- `_create_base_row` (line 151): Never called; references undefined `self.output_column_name`
- `_add_metadata` (line 160): Never called; references undefined `self.enable_metadata` and `self.output_column_name`

The Agentics class has no attributes for `output_column_name` or `enable_metadata` in its inputs. Remove these methods unless they will be actively used in the component's logic.
```python
    async def transduce(self) -> DataFrame:
        """Process each row in df[column_name] with the language model asynchronously."""
        # Check if model is already an instance (for testing) or needs to be instantiated
        llm = None
        if isinstance(self.model, list):
            # Extract model configuration
            model_selection = self.model[0]
            model_name = model_selection.get("name")
            provider = model_selection.get("provider")
            metadata = model_selection.get("metadata", {})

            # Get model class and parameters from metadata
            model_class = get_model_classes().get(metadata.get("model_class"))
            if model_class is None:
                msg = f"No model class defined for {model_name}"
                raise ValueError(msg)

            # Get API key from global variables
            from lfx.base.models.unified_models import get_api_key_for_provider

            api_key = get_api_key_for_provider(self.user_id, provider, self.api_key)

            if not api_key and provider != "Ollama":
                msg = f"{provider} API key is required. Please configure it globally."
                raise ValueError(msg)

            if provider == "IBM WatsonX":
                llm = LLM(
                    model="watsonx/" + model_name,
                    base_url="https://us-south.ml.cloud.ibm.com",
                    project_id=self.get_project_name(),
                    api_key=api_key,
                    temperature=0,
                    max_tokens=4000,
                    max_input_tokens=100000,
                )
            elif provider == "Google Generative AI":
                llm = LLM(model="gemini/" + model_name, api_key=api_key)

            else:
                return "Please fix model paramters"

        # print("AAAAAA" , type(self.source))

        # if isinstance(self.source, list):

        source = AG.from_dataframe(DataFrame(self.source))
        schema_fields = [
            (
                field["name"],
                field["description"],
                field["type"] if not field["multiple"] else f"list[{field['type']}]",
                False,
            )
            for field in self.schema
        ]
        atype = create_pydantic_model(schema_fields, name=self.atype_name)
        if self.transduction_type == "generate":
            output_states = await generate_prototypical_instances(atype, n_instances=self.batch_size)
            output = AG(states=output_states)
        else:
            target = AG(
                atype=atype,
                instructions=self.instructions,
                transduction_type=self.transduction_type,
                amap_batch_size=self.batch_size,
                llm=llm,
            )
            output = await (target << source)
```
`llm` remains None when `self.model` is not a list.
If the model is provided via a connected handle (not a list selection), the `if isinstance(self.model, list)` block is skipped, leaving `llm = None`. This `None` is then passed to `AG()` at line 246, which may cause failures depending on how `AG` handles a missing LLM.
Consider handling the case where `self.model` is an already-instantiated model object:
Suggested approach
```diff
     async def transduce(self) -> DataFrame:
         """Process each row in df[column_name] with the language model asynchronously."""
-        # Check if model is already an instance (for testing) or needs to be instantiated
         llm = None
         if isinstance(self.model, list):
             # ... existing list handling ...
+        elif self.model is not None:
+            # Model provided via handle connection - use directly or wrap appropriately
+            llm = self.model  # Or wrap: LLM(model=self.model) if needed
+        else:
+            msg = "No model provided"
+            raise ValueError(msg)
```
In `@src/backend/base/langflow/components/agentics/agentics.py` around lines 180 -
248, In transduce(), llm can remain None when self.model is not a list; before
creating target = AG(..., llm=llm) detect and handle already-instantiated models
by checking self.model's type/interface (e.g., isinstance(self.model, LLM) or a
known Agentic wrapper) and set llm = self.model, or if self.model is a
dict/object that needs instantiation, construct the LLM similarly to the list
branch; if no valid model is available raise a clear ValueError. Update the
logic around the existing llm variable in transduce (refer to the llm local, the
isinstance(self.model, list) branch, and the AG(...) instantiation) so llm is
always a valid model or a clear error is thrown before calling AG.
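The guard being asked for can be illustrated in isolation. This is a hedged sketch, not the component's real code: `LLM` below is a stand-in class, and the real check would test against the actual agentics `LLM` type:

```python
class LLM:
    """Stand-in for the real LLM client class (illustration only)."""

    def __init__(self, model: str):
        self.model = model


def resolve_llm(model):
    """Normalize whatever arrives on self.model: return a usable LLM or raise.

    With this guard, the local `llm` can never silently remain None.
    """
    if isinstance(model, list):
        if not model:
            raise ValueError("No model selected")
        # Simplified: the real branch would dispatch on provider as well
        return LLM(model=model[0].get("name", ""))
    if isinstance(model, LLM):
        # Already-instantiated model from a handle connection
        return model
    raise ValueError("No model provided")
```

The key property is that every path either yields a valid object or raises a typed error before `AG()` is ever constructed.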
```python
        if isinstance(self.model, list):
            # Extract model configuration
            model_selection = self.model[0]
            model_name = model_selection.get("name")
            provider = model_selection.get("provider")
            metadata = model_selection.get("metadata", {})

            # Get model class and parameters from metadata
            model_class = get_model_classes().get(metadata.get("model_class"))
            if model_class is None:
                msg = f"No model class defined for {model_name}"
                raise ValueError(msg)

            # Get API key from global variables
            from lfx.base.models.unified_models import get_api_key_for_provider

            api_key = get_api_key_for_provider(self.user_id, provider, self.api_key)

            if not api_key and provider != "Ollama":
                msg = f"{provider} API key is required. Please configure it globally."
                raise ValueError(msg)

            if provider == "IBM WatsonX":
                llm = LLM(
                    model="watsonx/" + model_name,
                    base_url="https://us-south.ml.cloud.ibm.com",
                    project_id=self.get_project_name(),
                    api_key=api_key,
                    temperature=0,
                    max_tokens=4000,
                    max_input_tokens=100000,
                )
            elif provider == "Google Generative AI":
                llm = LLM(model="gemini/" + model_name, api_key=api_key)

            else:
                return "Please fix model paramters"
```
Critical: Incomplete provider handling and type-unsafe error return.
Several issues in the model instantiation logic:
- Line 186: `self.model[0]` can raise `IndexError` if the list is empty
- Lines 206-220: Only handles "IBM WatsonX" and "Google Generative AI", but `get_model_classes()` supports OpenAI, Anthropic, and Ollama as well
- Line 220: Returns the string `"Please fix model paramters"` (typo: "paramters") instead of raising an exception; this violates the `DataFrame` return type and causes silent failures downstream
Suggested fix
```diff
     async def transduce(self) -> DataFrame:
         """Process each row in df[column_name] with the language model asynchronously."""
         llm = None
         if isinstance(self.model, list):
+            if not self.model:
+                msg = "No model selected"
+                raise ValueError(msg)
             # Extract model configuration
             model_selection = self.model[0]
             model_name = model_selection.get("name")
             provider = model_selection.get("provider")
             metadata = model_selection.get("metadata", {})
             # Get model class and parameters from metadata
             model_class = get_model_classes().get(metadata.get("model_class"))
             if model_class is None:
                 msg = f"No model class defined for {model_name}"
                 raise ValueError(msg)
             # Get API key from global variables
             from lfx.base.models.unified_models import get_api_key_for_provider
             api_key = get_api_key_for_provider(self.user_id, provider, self.api_key)
             if not api_key and provider != "Ollama":
                 msg = f"{provider} API key is required. Please configure it globally."
                 raise ValueError(msg)
             if provider == "IBM WatsonX":
                 llm = LLM(
                     model="watsonx/" + model_name,
                     base_url="https://us-south.ml.cloud.ibm.com",
                     project_id=self.get_project_name(),
                     api_key=api_key,
                     temperature=0,
                     max_tokens=4000,
                     max_input_tokens=100000,
                 )
             elif provider == "Google Generative AI":
                 llm = LLM(model="gemini/" + model_name, api_key=api_key)
-
+            elif provider == "OpenAI":
+                llm = LLM(model="openai/" + model_name, api_key=api_key)
+            elif provider == "Anthropic":
+                llm = LLM(model="anthropic/" + model_name, api_key=api_key)
+            elif provider == "Ollama":
+                llm = LLM(model="ollama/" + model_name)
             else:
-                return "Please fix model paramters"
+                msg = f"Unsupported provider: {provider}"
+                raise ValueError(msg)
```
+ raise ValueError(msg)🤖 Prompt for AI Agents
In `@src/backend/base/langflow/components/agentics/agentics.py` around lines 184 -
220, The model-instantiation block in Agentics must validate and exhaustively
handle providers and return typed errors: first check that self.model is a
non-empty list (avoid IndexError on self.model[0]) and raise a clear ValueError
if empty; use get_model_classes() to resolve metadata.get("model_class") as you
already do, then add explicit branches for the other supported providers
(OpenAI, Anthropic, Ollama) and instantiate the correct LLM (or equivalent
client) with the proper parameters (use get_api_key_for_provider(self.user_id,
provider, self.api_key) for API keys and self.get_project_name() where needed);
never return a string on failure—raise a descriptive exception (fix the typo
“paramters”) for unsupported/invalid provider or missing API key so downstream
consumers get a consistent error type.
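One way to keep the provider branches exhaustive is a dispatch table instead of an if/elif chain. The sketch below assumes all providers follow the litellm-style `provider/model` prefix convention already used for WatsonX and Gemini in this component; the mapping itself is illustrative, not the PR's code:

```python
# Provider label -> model-id prefix (assumed litellm-style convention)
PROVIDER_PREFIXES = {
    "IBM WatsonX": "watsonx/",
    "Google Generative AI": "gemini/",
    "OpenAI": "openai/",
    "Anthropic": "anthropic/",
    "Ollama": "ollama/",
}


def qualified_model_name(provider: str, model_name: str) -> str:
    """Map a provider label to a fully qualified model id, failing loudly."""
    try:
        return PROVIDER_PREFIXES[provider] + model_name
    except KeyError:
        # Unknown provider -> typed error, never a string return value
        raise ValueError(f"Unsupported provider: {provider}") from None
```

Adding a provider then becomes a one-line table entry, and an unknown provider always raises rather than falling through.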
```python
        df = output.to_dataframe()
        return df.to_dict(orient="records")
```
Return type mismatch: returns `list[dict]` instead of `DataFrame`.
The method signature declares `-> DataFrame`, but `df.to_dict(orient="records")` returns `list[dict]`. Either update the return type annotation or return the DataFrame directly.
Option 1: Return DataFrame directly
```diff
         df = output.to_dataframe()
-        return df.to_dict(orient="records")
+        return df
```
Option 2: Update return type annotation
```diff
-    async def transduce(self) -> DataFrame:
+    async def transduce(self) -> list[dict[str, Any]]:
```
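The mismatch is easy to reproduce with plain pandas (assuming Langflow's `DataFrame` behaves like a pandas `DataFrame` here):

```python
import pandas as pd

df = pd.DataFrame({"name": ["a", "b"], "score": [1, 2]})
records = df.to_dict(orient="records")
# `records` is a list of one dict per row, not a DataFrame
assert isinstance(records, list)
```

So a caller relying on the declared `-> DataFrame` annotation would actually receive a `list[dict]`.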
🤖 Prompt for AI Agents
In `@src/backend/base/langflow/components/agentics/agentics.py` around lines 252 -
253, The method currently constructs df = output.to_dataframe() and returns
df.to_dict(orient="records") while its signature declares -> DataFrame; fix by
either returning the DataFrame object (return df) to match the DataFrame return
type, or change the function's return type annotation to list[dict] (or
List[Dict[str, Any]]) to reflect df.to_dict(orient="records"); update the
signature accordingly and adjust any downstream callers/types if you choose the
latter.
Released new component https://github.com/gliozzo/langflow/tree/agentics_boundle_llm_fix/src/backend/base/langflow/components/agentics
Need to figure out how to integrate the requested agentics-py module